2.2 Configuring multipath I/O
Hardware vendors typically supply a Device Specific Module (DSM) for SAN hardware, along with software for configuring multipath I/O. That said, the Multipath I/O feature includes the Microsoft DSM and some basic configuration options. The Microsoft DSM supports the Active/Active controller model as well as the asymmetric logical unit access controller model. It also implements three path selection policies: failover, failback, and load balancing. Failover policies allow you to configure a secondary path that is used if a preferred path fails. If you want the preferred path to be used automatically when it becomes operational again, you can configure a failback policy.
Several types of load-balancing policies are available, including round-robin, dynamic least queue depth, and weighted
path. With round-robin, you can configure the DSM to use all available
I/O paths in a balanced, round-robin fashion. With dynamic least queue
depth, you can configure the DSM to route I/O to the path with the fewest outstanding requests. With weighted path, you assign each
path a weight to indicate its relative priority with regard to a
particular application, and the DSM selects the path with the least
weight among the available paths.
Devices that support the Active/Active controller model are referred to as Active/Active devices
and, by default, are configured to use round-robin. Generally, devices
that support the asymmetric logical unit access (ALUA) controller model
are compliant with the SCSI Primary Commands-3 (SPC-3) standard or later
and, by default, are configured to use failover.
You manage the multipath I/O (MPIO)
configuration using the MPIO Properties dialog box, the Mpclaim
command-line tool, or the cmdlets of the MPIO module in PowerShell.
After you install the Multipath I/O feature using the Add Roles And
Features Wizard, these tools are available on the server. You open the
MPIO Properties dialog box, shown in Figure 6, by selecting MPIO on the Tools menu in Server Manager.
Note
You can get a list of the available cmdlets for working with MPIO by typing Get-Command -Module MPIO at a PowerShell prompt.
After you enable MPIO, you might also want to do the following:
- Enable automatic claiming of iSCSI devices for MPIO.
- Set the default load-balance policy.
- Set the Windows disk timeout.
For MPIO to manage a device, you must first add the hardware ID for
the device to MPIO. You can add devices either manually or
automatically.
Automatic claiming of iSCSI devices allows MPIO
to configure available iSCSI devices with multiple paths automatically.
Enable this feature by entering the following at an elevated PowerShell prompt:
Enable-MSDSMAutomaticClaim -BusType iSCSI
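If you are not sure whether automatic claiming is already on, the MPIO module also provides cmdlets for checking and reversing this setting. The following commands are a sketch; they assume the Multipath I/O feature is installed and that you are working at an elevated PowerShell prompt:

Get-MSDSMAutomaticClaimSettings
Disable-MSDSMAutomaticClaim -BusType iSCSI

Get-MSDSMAutomaticClaimSettings reports whether iSCSI and SAS devices are claimed automatically, and Disable-MSDSMAutomaticClaim reverses the setting for the bus type you specify.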
Load balancing and fault tolerance are core features of MPIO. You set the default load-balancing policy using Set-MSDSMGlobalDefaultLoadBalancePolicy. The available policies are
- Fail over only, which allows one active path, with all other paths designated as standby paths for failover. Use the value FOO.
- Round robin, which sets all available paths to be load-balanced using a round-robin technique. Use the value RR.
- Least queue depth, which load-balances by sending I/O to the path with the fewest outstanding I/O requests. Use the value LQD.
- Least blocks, which load-balances by sending I/O to the path with the fewest data blocks currently being processed. Use the value LB.
Set the default load-balancing policy by entering the following command at an elevated PowerShell prompt:
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy PolicyValue
Here, PolicyValue is one of the accepted policy values: FOO, RR, LQD, LB, or NONE.
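For example, to make round robin the default policy and then confirm the change, you might enter the following (a sketch; the default applies only to devices claimed by the Microsoft DSM):

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
Get-MSDSMGlobalDefaultLoadBalancePolicy

The Get- cmdlet simply echoes the current default, which makes it an easy way to verify that the setting took effect.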
You set the timeout value for new disks using Set-MPIOSetting. The basic syntax is
Set-MPIOSetting -NewDiskTimeout NumSeconds
Here, NumSeconds is the number of seconds to wait before reaching the timeout.
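As a concrete sketch, the following sets a 60-second disk timeout; the value 60 is illustrative, not a recommendation, and the change might not take effect until the server is restarted:

Set-MPIOSetting -NewDiskTimeout 60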
Set-MPIOSetting accepts other parameters as well:
- –PathVerifyEnabled When set to $true, path verification by MPIO is enabled on all paths and occurs at the interval set by –PathVerificationPeriod. By default, this feature is disabled.
- –PathVerificationPeriod When –PathVerifyEnabled is set to $true, this parameter sets the interval (in seconds) for path verification. For example, set –PathVerificationPeriod to 60 to verify all paths every 60 seconds. The default value is 30 seconds.
- –PDORemovePeriod Controls the amount of time (in seconds) that a multipath pseudo-LUN remains in system memory after losing all paths to the device. When the removal period is exceeded, all pending I/O operations are stopped and failed, and the failure is passed on to applications. The default value is 20 seconds.
- –RetryCount Controls the number of times a failed I/O is retried. The default value is 3.
- –RetryInterval Sets the number of seconds to wait before retrying a failed I/O. The default is 1 second.
Before you change MPIO settings, you should determine what the
current settings are. You can do this by entering Get-MPIOSetting at the
PowerShell prompt.
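For example, you might capture the current settings and then adjust the retry behavior in one pass. The values shown are illustrative, not recommendations:

Get-MPIOSetting
Set-MPIOSetting -NewRetryCount 5 -NewRetryInterval 2

Note that the Set-MPIOSetting parameter names carry a New prefix; for example, –NewRetryCount corresponds to the RetryCount setting reported by Get-MPIOSetting.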
Adding and removing multipath hardware devices
You manually add devices to MPIO using the MPIO Properties dialog
box, which is opened by selecting MPIO on the Tools menu in Server
Manager. To manually configure a device to use multipath I/O, follow these steps:
- Open the MPIO Properties dialog box. On the MPIO Devices tab, you’ll see a list of currently configured multipath devices. If the device you want to work with is not listed, tap or click Add.
- In the Add MPIO Support dialog box, type the vendor ID as an eight-character string followed by the product ID for the device as a 16-character string. Tap or click OK.
- You are prompted to restart the server to complete the operation. Tap or click Yes to restart the server.
At an elevated command prompt, you can use Mpclaim to configure
devices to use multipath I/O as well. The basic syntax for installing a
device follows:
Mpclaim -r -i [-a | -c | -d DeviceId]
The –r parameter
indicates that you want to restart the server to allow the device
installation to be completed. Although you can suppress the restart
using the –n parameter instead of –r, the device will not be installed and available for use until you restart the server. Use the –a parameter to configure multipath I/O support for all compatible devices. Use the –c parameter to configure multipath I/O support for all SPC-3-compliant devices. Use the –d
parameter followed by a device’s hardware ID to install a specific
hardware device. The hardware ID of a device includes the vendor ID as
an eight-character string followed by the product ID for the device as a
16-character string. In the following example, you install a device
with EMSVendo0000234767834215 as the hardware ID:
Mpclaim -r -i -d EMSVendo0000234767834215
Alternatively, you can use Get-MSDSMSupportedHw to list available devices by their hardware ID and New-MSDSMSupportedHw to add a device to MPIO.
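As a sketch, the PowerShell equivalent of the Mpclaim installation shown earlier splits the hardware ID into its vendor and product parts (reusing the example ID from above):

New-MSDSMSupportedHw -VendorId "EMSVendo" -ProductId "0000234767834215"
Get-MSDSMSupportedHw

Running Get-MSDSMSupportedHw afterward lists the vendor and product IDs that MPIO will claim, which lets you confirm the new entry was added.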
Using the MPIO Properties dialog box, you can remove a device from MPIO by following these steps:
- Open the MPIO Properties dialog box. On the MPIO Devices tab, you’ll see a list of currently configured multipath devices.
- Select the device that should no longer use multipath I/O and then tap or click Remove.
- You are prompted to restart the server to complete the operation. Tap or click Yes to restart the server.
At an elevated command prompt, you can use Mpclaim to uninstall multipath I/O for a device as well. The basic syntax for uninstalling a device follows:
Mpclaim -r -u [-a | -c | -d DeviceId]
Except for the –u
parameter for uninstalling a device, the other parameters are the same
as when you are installing MPIO for a device. The following example
uninstalls the previously installed device:
Mpclaim -r -u -d EMSVendo0000234767834215
Alternatively, you can use Get-MSDSMSupportedHw to list available devices by their hardware ID and Remove-MSDSMSupportedHw to remove a device from MPIO.
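The matching PowerShell sketch for removal mirrors the addition, again splitting the hardware ID into its vendor and product strings:

Remove-MSDSMSupportedHw -VendorId "EMSVendo" -ProductId "0000234767834215"

Afterward, running Get-MSDSMSupportedHw should show that the entry is gone.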
Managing and maintaining MPIO
The MPIO Properties dialog box has several other tabs that you can use for general management of MPIO:
- Discover Multi-Paths When you select the Discover Multi-Paths tab, Windows runs a discovery algorithm to examine added device instances and determine whether multiple instances represent the same LUN through different paths. Available multipath devices are then listed by their hardware ID. The hardware ID combines a vendor’s name and a product string that matches a device ID maintained by MPIO. Tap or click Add to add hardware IDs for Fibre Channel devices that use the Microsoft DSM.
- DSM Install Use the options on this tab to install DSMs provided by independent hardware vendors (IHVs). Keep in mind that many SPC-3-compliant storage arrays can use the Microsoft DSM, so you might not need to install an IHV DSM.
- Configure Snapshot Use the options on this tab to save the current MPIO configuration to a log file. Because the log includes details about the DSM, paths, and path states, you can use this information for troubleshooting.
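You can capture the same kind of configuration snapshot from the command line with Mpclaim. The following is a sketch; the file path is an arbitrary example:

Mpclaim -v C:\Temp\mpio-snapshot.txt

The –v parameter writes the current MPIO configuration, including paths and path states, to the file you specify, which is useful when you need to attach configuration details to a support case.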
You configure the load-balancing policy for LUNs using their disk
properties. In Computer Management, select Disk Management and then
press and hold or right-click the disk you want to work with. In the
Properties dialog box, click the MPIO tab. Use the Select MPIO Policy
list to choose the load-balancing policy for the selected disk. If you
use Failover Only as the load-balancing policy, you can configure a
preferred path to the storage. This path is used for automatic failback.
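If you prefer the command line, Mpclaim can also set the per-disk load-balancing policy. In this sketch, the disk number comes from the output of Mpclaim -s -d, and the trailing number selects the policy (for example, 1 for Fail Over Only, 2 for Round Robin); the disk number 2 shown here is arbitrary:

Mpclaim -s -d
Mpclaim -l -d 2 2

The –s parameter queries current settings, while –l sets the load-balancing policy for the disk you specify.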
Whether you are working with internal or external disks, you should
follow the same basic principles to help ensure that the chosen storage
solutions meet your performance, capacity, and availability
requirements. Storage performance is primarily a factor of the disk’s access time (how long it takes to begin servicing a request), seek time (how long it takes to position the drive head over the requested data), and transfer rate (how quickly data can be read and written). Storage capacity
relates to how much information you can store on a volume or logical
disk.
Although early NTFS implementations limited both the maximum volume size and the maximum file size to 32 GBs, later implementations extended these limits. This
means you can have a maximum NTFS volume size of 256 TBs minus 64 KBs
when you are using 64-KB clusters, and 16 TBs minus 4 KBs when you are
using 4-KB clusters. The maximum file size on an NTFS volume is 16 TBs
minus 64 KBs. Further, a maximum of 4,294,967,294 files can be created
on each volume, and a single server can manage hundreds of volumes
(theoretically, around 2000).
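These ceilings follow from NTFS’s use of 32-bit cluster numbers: roughly 2^32 clusters multiplied by the cluster size gives the maximum volume size. You can sanity-check the arithmetic at a PowerShell prompt, which understands KB and TB suffixes natively:

([math]::Pow(2,32) * 64KB) / 1TB
([math]::Pow(2,32) * 4KB) / 1TB

The first expression yields 256 and the second 16, matching the 256-TB and 16-TB figures; in practice each limit is reduced by one cluster, which is where the “minus 64 KBs” and “minus 4 KBs” qualifiers come from.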
Storage availability relates to fault tolerance. You ensure availability for essential applications and services by using availability
technologies. If a server has a problem or a particular application or
service fails, you have a way to continue operations by failing over to
another server. In addition to clusters, you can help ensure availability
by saving redundant copies of data, keeping spare parts, and if
possible making standby servers available. At the disk and data levels, availability
is enhanced by using redundant array of independent disks (RAID)
technologies. RAID allows you to combine disks and to improve fault
tolerance.
RAID can be implemented in hardware or software. When servers have hardware
RAID controllers installed, the internal controller can be used to
implement RAID on the server’s internal disks. When a server is
allocated storage from a storage array, one or more logical unit numbers, or LUNs,
are assigned. Each LUN is a virtual disk. Typically, hardware RAID
configured within the storage array is used to spread the LUN across
multiple physical disks (also called spindles).
Windows Server 2012 supports several software RAID options, including traditional software-based RAID and Storage
Spaces. Traditional software RAID is the software-based RAID technology
built into the operating system and available in earlier releases of
Windows. Storage Spaces provide resilient storage using new technologies
and are preferred over traditional software RAID. However, each of
these software-implemented RAID levels requires processing power and
memory resources to maintain. By using hardware RAID, you use separate
hardware controllers (RAID controllers) to maintain the disk arrays.
Although this requires the purchase of additional hardware, it takes the
burden off the server and can improve performance.
Why? In a hardware-implemented RAID system, a server’s processing power
and memory aren’t used to maintain the disk arrays. Instead, the
hardware RAID controller (which is installed internally or in a storage
array) handles all the necessary processing tasks.
The RAID levels available with a hardware implementation depend on the hardware controller/storage array and the vendor’s implementation of RAID technologies. Some hardware RAID configurations include RAID 0 (disk striping), RAID 1 (disk mirroring), RAID 0+1 (disk striping with mirroring), RAID 5 (disk striping with parity), and RAID 5+1 (disk striping with parity plus mirroring). Table 1 provides a summary of these RAID technologies. The entries are listed from the highest RAID level to the lowest.
Table 1. Hardware RAID configurations for clusters
RAID Level | RAID Type | RAID Description | Advantages and Disadvantages
5+1 | Disk striping with parity plus mirroring | Uses at least six volumes, each on a separate drive. Each volume is configured identically as a mirrored striped set with parity error checking. | Provides a high level of fault tolerance, but has a lot of overhead.
5 | Disk striping with parity | Uses at least three volumes, each on a separate drive. Each volume is configured as a striped set with parity error checking. In the case of failure, data can be recovered. | Provides fault tolerance with less overhead than mirroring and better read performance than disk mirroring.
1 | Disk mirroring | Uses two volumes on two drives. The drives are configured identically, and data is written to both drives. If one drive fails, there is no data loss because the other drive contains the data. This approach does not include disk striping. | Provides redundancy with better write performance than disk striping with parity.
0+1 | Disk striping with mirroring | Uses two or more volumes, each on a separate drive. The volumes are striped and mirrored. Data is written sequentially to drives that are identically configured. | Provides redundancy with good read and write performance.
0 | Disk striping | Uses two or more volumes, each on a separate drive. Volumes are configured as a striped set. Data is broken into blocks, called stripes, and then written sequentially to all drives in the striped set. | Provides speed and performance without data protection.